General Discussion: Our work is part of the following larger and important discussion within the NeurIPS community.
We got a clear sense of where more clarification would be helpful. To what solution do neural nets (trained with GD) converge? GD on this network simulates the unnormalized exponentiated gradient algorithm (EGU). Previously it was thought that GD cannot take advantage of the sparsity of the solution. What is the surprising insight?
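The claim that GD on a reparameterized network simulates unnormalized exponentiated gradient (EGU) can be illustrated with a minimal sketch. This is not the paper's own code: the sparse least-squares problem, the squared reparameterization w = u ⊙ u, and the step sizes below are illustrative assumptions chosen so that the two discrete updates track each other.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, 1.0, 0.5]   # sparse, nonnegative target
y = X @ w_true                  # noiseless labels

def grad(w):
    # gradient of the least-squares loss 0.5/n * ||Xw - y||^2
    return X.T @ (X @ w - y) / n

eta = 0.05
w_egu = np.full(d, 0.1)         # EGU iterate (multiplicative update)
u = np.sqrt(np.full(d, 0.1))    # GD iterate on u, with w = u * u

for _ in range(2000):
    # EGU: w <- w * exp(-eta * grad)
    w_egu = w_egu * np.exp(-eta * grad(w_egu))
    # GD on u: chain rule gives d/du L(u*u) = 2 u * grad(w);
    # step size eta/4 makes w = u*u follow w * (1 - eta*g/2)^2 ~ w * exp(-eta*g)
    u = u - (eta / 4) * 2 * u * grad(u * u)

w_gd = u * u
print(np.round(w_egu[:5], 3))
print(np.round(w_gd[:5], 3))
```

Both iterates recover the sparse solution, and the GD trajectory on the squared parameterization stays close to the EGU trajectory, which is the sense in which plain GD on this network "simulates" EGU.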
We thank the reviewers for their useful and thoughtful feedback. We are glad to see that our work was found "highly relevant to the NeurIPS community" (R1) and to be addressing "a well-motivated problem" (R3) in "an important area". That is, "the experiments show that the proposed method outperforms the baselines [...]". We address the reviewers' comments below, and we refer to our method for noisy inference. How does the estimation error [of ...]? Will it change the claims of the paper? [...] Thus, our claims are unaffected. While we share the reviewers' desire for convergence guarantees, we also note that [...]. We go from Eq. 7 to Eq. 8 by swapping [...]. Regarding the noiseless case, see Figure 1. In Eq. 7, the prior density is included in [...]. The ablation studies are mentioned in the text and fully reported in the Supplement (E.3, "Lesion study").
We thank the reviewers for their thoughtful feedback in these difficult times caused by the global COVID-19 pandemic. QM9 is used for training, the model must be based on LCAO, and QDF achieved high extrapolation performance. We emphasize that even this LDA-like HK map achieved high extrapolation performance. We will address this in future work. Of course, QDF can be proposed without a comparison to GCN.
Reproducibility is of central importance to the whole NeurIPS community and was also unanimously identified during [...].
First of all, we thank all reviewers for their valuable time and feedback. We thank the reviewers for pointing out typos and grammatical errors, which we have of course now fixed. We are afraid that the reviewer may have misunderstood some parts of the paper; we refer to the original paper for further details about the approximation of the variational posterior. We have clarified this in the main paper.